Mitigating Bias in Gender, Age and Ethnicity Classification: a Multi-Task Convolution Neural Network Approach
This work explores joint classification of gender, age and race. Specifically, we propose a Multi-Task Convolution Neural Network (MTCNN) employing joint dynamic loss weight adjustment for classification of the named soft biometrics, as well as for mitigation of soft-biometrics-related bias. The proposed algorithm achieves promising results on the UTKFace and the Bias Estimation in Face Analytics (BEFA) datasets and was ranked first in the BEFA Challenge of the European Conference on Computer Vision (ECCV) 2018.
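The dynamic loss weight adjustment can be illustrated with a minimal sketch. The weighting rule below (renormalising weights in proportion to each task's current loss, so harder tasks get more emphasis) is a common heuristic assumed for illustration, not the paper's exact scheme:

```python
def dynamic_loss_weights(task_losses):
    """Renormalise task weights each step in proportion to the
    current per-task loss, so harder tasks receive more weight.
    NOTE: an illustrative heuristic, not the paper's exact rule."""
    total = sum(task_losses)
    return [loss / total for loss in task_losses]

def joint_loss(task_losses, weights):
    """Weighted sum of the per-task losses (gender, age, ethnicity)."""
    return sum(w * l for w, l in zip(weights, task_losses))

# Example: age is currently the hardest of the three tasks.
losses = [0.2, 0.8, 0.5]            # gender, age, ethnicity
weights = dynamic_loss_weights(losses)
total_loss = joint_loss(losses, weights)
```

Recomputing the weights every training step is what makes the weighting "dynamic": as one task's loss falls, its share of the joint objective shrinks automatically.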
Towards Person Identification and Re-identification with Attributes
Abstract. Visual identification of an individual in a crowded environment observed by a distributed camera network is critical to a variety of tasks including commercial space management, border control, and crime prevention. Automatic re-identification of a human from public space CCTV video is challenging due to spatiotemporal visual feature variations and strong visual similarity in people’s appearance, compounded by low-resolution and poor-quality video data. Relying on re-identification using a probe image is limiting, as a linguistic description of an individual’s profile may often be the only available cue. In this work, we show how mid-level semantic attributes can be used synergistically with low-level features for both identification and re-identification. Specifically, we learn an attribute-centric representation to describe people, and a metric for comparing attribute profiles to disambiguate individuals. This differs from existing approaches to re-identification, which rely purely on bottom-up statistics of low-level features: it allows improved robustness to view and lighting, and can be used for identification as well as re-identification. Experiments demonstrate the flexibility and effectiveness of our approach compared to existing feature representations when applied to benchmark datasets.
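The idea of comparing attribute profiles can be sketched in a few lines. The attribute names, confidence values, and the uniform weights below are all hypothetical; in the paper the metric is learned rather than fixed:

```python
# Hypothetical attribute profile: each entry is a confidence in [0, 1]
# that the person exhibits that attribute. Names are illustrative only.
ATTRIBUTES = ["male", "dark_hair", "backpack", "jeans", "glasses"]

def attribute_distance(p, q, weights=None):
    """Weighted L1 distance between two attribute profiles.
    Learning these weights from data (instead of fixing them)
    is the role of the metric that disambiguates individuals."""
    if weights is None:
        weights = [1.0] * len(p)
    return sum(w * abs(a - b) for w, a, b in zip(weights, p, q))

probe = [0.9, 0.8, 0.1, 0.7, 0.0]     # description of the person sought
gallery = {
    "person_A": [0.85, 0.75, 0.2, 0.6, 0.1],
    "person_B": [0.1, 0.9, 0.8, 0.3, 0.9],
}
best_match = min(gallery, key=lambda k: attribute_distance(probe, gallery[k]))
```

Because the probe here is a vector of attribute confidences rather than an image, the same comparison works whether the query comes from a detector or from a verbal description.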
Extraction of bodily features for gait recognition and gait attractiveness evaluation
This is the author's accepted manuscript. The final publication is available at Springer via http://dx.doi.org/10.1007/s11042-012-1319-2. Copyright @ 2012 Springer.

Although there has been much previous research on which bodily features are most important in gait analysis, the questions of which features should be extracted from gait, and why those features in particular, have not been convincingly answered. The primary goal of the study reported here was to take an analytical approach to answering these questions, in the context of identifying the features that are most important for gait recognition and gait attractiveness evaluation. Using precise 3D gait motion data obtained from motion capture, we analyzed the motions of different body segments relative to a root marker (located on the lower back) of 30 males by the fixed root method, and compared them with the original motions without a fixed root. Particular features were obtained by principal component analysis (PCA). The left lower arm, lower legs and hips were identified as important features for gait recognition. For gait attractiveness evaluation, the lower legs were recognized as important features. Dorothy Hodgkin Postgraduate Award and HEFCE.
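The "fixed root" preprocessing described above can be sketched simply: each marker trajectory is re-expressed relative to the root marker, removing whole-body translation so that only relative limb motion remains. The coordinates below are toy values, not motion-capture data:

```python
def relative_to_root(segment_traj, root_traj):
    """Express each frame of a segment marker's trajectory relative
    to the root marker (lower back), removing global translation.
    Trajectories are lists of (x, y, z) tuples, one per frame."""
    return [
        tuple(s - r for s, r in zip(seg, root))
        for seg, root in zip(segment_traj, root_traj)
    ]

# Toy 2-frame example: the body translates 1.0 unit forward in x,
# while the lower-leg marker additionally swings 0.2 units.
root = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
leg  = [(0.1, 0.0, 0.4), (1.3, 0.0, 0.4)]
rel  = relative_to_root(leg, root)
```

After this step, PCA on the relative trajectories captures variance due to limb movement rather than variance due to where the subject happened to walk.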
Is gender encoded in the smile? A computational framework for the analysis of the smile driven dynamic face for gender recognition
Automatic gender classification has become a topic of great interest to the visual computing research community in recent times. This is due to the fact that computer-based automatic gender recognition has multiple applications including, but not limited to, face perception, age, ethnicity and identity analysis, video surveillance and smart human-computer interaction. In this paper, we discuss a machine learning approach for efficient identification of gender purely from the dynamics of a person’s smile. Thus, we show that the complex dynamics of a smile on someone’s face bear much relation to the person’s gender. To do this, we first formulate a computational framework that captures the dynamic characteristics of a smile. Our dynamic framework measures changes in the face during a smile using a set of spatial features on the overall face, the area of the mouth, the geometric flow around prominent parts of the face and a set of intrinsic features based on the dynamic geometry of the face. This enables us to extract 210 distinct dynamic smile parameters, which form the contributing features for machine learning. For machine classification, we have utilised both the Support Vector Machine and the k-Nearest Neighbour algorithms. To verify the accuracy of our approach, we have tested our algorithms on two databases, namely CK+ and MUG, consisting of a total of 109 subjects. As a result, using the k-NN algorithm with tenfold cross-validation, for example, we achieve an accurate gender classification rate of over 85%. Hence, through the methodology we present here, we establish proof of the existence of strong indicators of gender dimorphism purely in the dynamics of a person’s smile.
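The k-NN classification step can be sketched as follows. The two features and the training values are made-up stand-ins for two of the 210 smile-dynamics parameters, and the clean male/female separation is purely illustrative:

```python
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training samples, using Euclidean distance on feature vectors.
    `train` is a list of (features, label) pairs."""
    neighbours = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Hypothetical 2-D smile-dynamics features, e.g. (mouth-area
# expansion rate, geometric-flow magnitude); values are invented.
train = [
    ((0.20, 1.1), "female"), ((0.30, 1.0), "female"), ((0.25, 1.2), "female"),
    ((0.80, 0.4), "male"),   ((0.90, 0.5), "male"),   ((0.85, 0.3), "male"),
]
pred = knn_predict(train, (0.28, 1.05), k=3)
```

In the study this vote would be run once per held-out fold of a tenfold cross-validation, averaging accuracy across folds.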
Comparing methods for assessment of facial dynamics in patients with major neurocognitive disorders
Assessing facial dynamics in patients with major neurocognitive disorders, and specifically with Alzheimer's disease (AD), has shown to be highly challenging. Classically, such assessment is performed by clinical staff evaluating the verbal and non-verbal language of AD patients, since the patients have lost a substantial amount of their cognitive capacity, and hence communication ability. In addition, patients need to communicate important messages, such as discomfort or pain. Automated methods would support the current healthcare system by allowing for telemedicine, i.e., less costly and logistically more convenient examination. In this work we compare methods for assessing facial dynamics such as talking, singing, neutral expression and smiling in AD patients, captured during music mnemotherapy sessions. Specifically, we compare 3D ConvNets, Very Deep Neural Network based Two-Stream ConvNets, as well as Improved Dense Trajectories. We have adapted these methods from prominent action recognition methods, and our promising results suggest that they generalize well to the context of facial dynamics. The Two-Stream ConvNets in combination with ResNet-152 obtain the best performance on our dataset, capturing even minor facial dynamics well, and have thus sparked high interest in the medical community.
FEMALE FACIAL AESTHETICS BASED ON SOFT BIOMETRICS AND PHOTO-QUALITY
ABSTRACT In this work we study the connection between subjective evaluation of facial aesthetics and selected objective parameters based on photo quality and facial soft biometrics. The approach is novel in that it jointly considers previous results on both photo quality and beauty assessment, and incorporates non-permanent facial characteristics and expressions in the context of female facial aesthetics. This study helps us understand the role of this specific set of features in how humans perceive facial images. Based on the above objective parameters, we further construct a simple linear metric that hints at modifiable parameters for aesthetics enhancement, and that can tune soft biometric systems seeking to predict the way humans perceive facial aesthetics. Index Terms: facial aesthetics, facial beauty, soft biometrics, image quality assessment.
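The form of such a simple linear metric can be sketched as a weighted sum of normalised features. The feature names and weights below are invented for illustration; they are not the coefficients fitted in the study:

```python
def linear_aesthetics_score(features, weights, bias=0.0):
    """Linear metric over normalised photo-quality and soft-biometric
    features. Weights here are illustrative placeholders, not the
    study's fitted coefficients."""
    return bias + sum(w * f for w, f in zip(weights, features))

# Hypothetical normalised features in [0, 1]:
# [illumination quality, sharpness, smile presence, eye openness]
weights = [0.3, 0.2, 0.4, 0.1]
photo_a = [0.9, 0.8, 1.0, 0.9]   # well-lit, sharp, smiling
photo_b = [0.4, 0.5, 0.0, 0.8]   # dim, blurry, neutral expression
score_a = linear_aesthetics_score(photo_a, weights)
score_b = linear_aesthetics_score(photo_b, weights)
```

Because the metric is linear, each weight directly indicates how much improving that one parameter (e.g. illumination, or smiling) would raise the predicted aesthetics score, which is what makes it useful for enhancement.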
Identifying customer behaviour and dwell time using soft biometrics
In a commercial environment, it is advantageous to know how long it takes customers to move between different regions, how long they spend in each region, and where they are likely to go as they move from one location to another. Presently, these measures can only be determined manually, or through the use of hardware tags (e.g. RFID). Soft biometrics are characteristics that can be used to describe, but not uniquely identify, an individual. They include traits such as height, weight, gender, hair, skin and clothing colour. Unlike traditional biometrics, soft biometrics can be acquired by surveillance cameras at range without any user cooperation. While these traits cannot provide robust authentication, they can be used to provide identification at long range, and aid in object tracking and detection in disjoint camera networks. In this chapter we propose using colour, height and luggage soft biometrics to determine operational statistics relating to how people move through a space. A novel average soft biometric is used to locate people who look distinct, and these people are then detected at various locations within a disjoint camera network to gradually obtain operational statistics.
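The average-soft-biometric idea can be sketched as follows: compute the mean trait vector over everyone observed, then rank people by their distance from that average, since people far from the average are the easiest to re-detect in other camera views. The trait names and values are hypothetical, and the L1 distance is an assumed stand-in for whatever comparison the chapter uses:

```python
def average_profile(profiles):
    """Mean soft-biometric vector over all observed people."""
    n = len(profiles)
    dims = len(profiles[0])
    return [sum(p[i] for p in profiles) / n for i in range(dims)]

def distinctiveness(profile, avg):
    """L1 distance from the crowd average; larger means the person
    stands out more and is easier to re-detect across cameras."""
    return sum(abs(a - b) for a, b in zip(profile, avg))

# Hypothetical normalised traits: [height, clothing hue, luggage size]
crowd = {
    "p1": [0.50, 0.50, 0.0],
    "p2": [0.55, 0.45, 0.1],
    "p3": [0.95, 0.90, 0.9],   # tall, bright clothing, large luggage
}
avg = average_profile(list(crowd.values()))
most_distinct = max(crowd, key=lambda k: distinctiveness(crowd[k], avg))
```

Tracking only the most distinct people trades coverage for reliability: fewer individuals are followed, but each re-detection across the disjoint cameras is far less ambiguous.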